
Qwen FP8 ModelOPT support #20734


Closed

jingyu-ml wants to merge changes from the jingyux/dev-qwen-fp8 branch.

Conversation

@jingyu-ml jingyu-ml commented Jul 10, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as the test commands used.
  • The test results, such as a before/after comparison or end-to-end results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Add FP8 support for QwQ-32B / Qwen2.5 / Qwen3 / Qwen3-MoE models quantized with ModelOPT.
FP8 checkpoints can be generated with ModelOPT's llm_ptq example: https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/examples/llm_ptq.
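For reference, a rough sketch of the quantization flow behind such a checkpoint, using ModelOPT's Python API. The model id and the toy calibration loop are placeholders, and the export step is deliberately omitted; the linked llm_ptq example is the authoritative recipe.

import torch
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; any of the Qwen checkpoints listed below would work.
model_id = "Qwen/Qwen3-1.7B"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def calibrate(m):
    # Tiny stand-in calibration loop; the real example runs a proper calibration dataset.
    inputs = tokenizer("Hello, my name is", return_tensors="pt").to(m.device)
    m(**inputs)

# Apply ModelOPT's default FP8 quantization config, then calibrate.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=calibrate)

# Exporting to the Hugging Face-style FP8 checkpoint that this PR loads with
# quantization="modelopt" is handled by the llm_ptq example and is not reproduced here.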

Tested on QwQ-32B / Qwen2.5-14B / Qwen3-1.7B / Qwen3-30B-A3B.

Test Plan

The following script was used to test it:

from vllm import LLM, SamplingParams

def main():
    # Point model_id at the ModelOPT FP8 checkpoint directory for one of the tested models:
    # model_id = "QwQ-32B"
    # model_id = "Qwen2.5-14B"
    # model_id = "Qwen3-1.7B"
    model_id = "Qwen3-30B-A3B"
    sampling_params = SamplingParams(temperature=0.8, top_p=0.9)

    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]

    llm = LLM(model=model_id, quantization="modelopt")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}")

if __name__ == "__main__":
    main()
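For the larger checkpoints (QwQ-32B, Qwen3-30B-A3B), the same script can be run across multiple GPUs by adding tensor parallelism; a one-line variant, where the tensor_parallel_size value is only an example:

from vllm import LLM

# Example only: choose tensor_parallel_size to match the available GPUs.
llm = LLM(model="Qwen3-30B-A3B", quantization="modelopt", tensor_parallel_size=2)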

Test Result

Prompt: 'Hello, my name is', Generated text: ' Chris. I need to make a form with a custom submit button. The problem'
Prompt: 'The president of the United States is', Generated text: ' not allowed to be a foreign national. The president must be a natural-born citizen'
Prompt: 'The capital of France is', Generated text: ' Paris. The capital of the United States is Washington, D.C. What is'
Prompt: 'The future of AI is', Generated text: ' about the emergence of super-intelligence, but the problem is that humans are not'

(Optional) Documentation Update


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only the fastcheck CI runs, covering a small but essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@jingyu-ml jingyu-ml marked this pull request as draft July 10, 2025 05:14
@mergify mergify bot added the llama (Related to Llama models) and qwen (Related to Qwen models) labels Jul 10, 2025
@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @jingyu-ml, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands vLLM's quantization capabilities by integrating full support for ModelOPT FP8 quantized models, with a particular focus on Qwen models. It introduces new quantization methods for linear and Mixture-of-Experts (MoE) layers, refines the weight loading process to accommodate ModelOPT's specific checkpoint formats and scale handling, and includes model-specific adaptations to ensure seamless loading and execution of these quantized models. The changes aim to broaden the range of supported models and improve loading efficiency for quantized architectures.

Highlights

  • ModelOPT FP8 Quantization Support: Added comprehensive support for ModelOPT FP8 quantization, enabling vLLM to load and run models quantized with this method. This includes identifying ModelOPT FP8 from Hugging Face configurations and handling its specific data formats.
  • Mixture-of-Experts (MoE) Layer Quantization: Introduced ModelOptFp8MoEMethod to specifically handle FP8 quantization for MoE layers. This new method manages the creation of FP8 weights and scales, processes scales after loading (e.g., combining w1 and w3 scales), and applies the quantization during the forward pass.
  • Flexible Weight Loading Enhancements: Improved the weight loading mechanism across various models to dynamically adapt to different ModelOPT weight scale patterns (e.g., weight_scale vs. weight_scale_2) and to correctly handle expert scale broadcasting from single values to multiple experts (a rough sketch of this broadcasting appears after this list). This includes updates to llama4, mllama4, qwen2, and qwen3_moe.
  • Performance Optimizations for MoE: Introduced supports_moe_loading flags to relevant weight loaders (FusedMoE, experts_int8, moe_wna16) to explicitly mark them as MoE-aware. This optimization helps avoid expensive runtime reflection during model loading, improving efficiency for MoE architectures.
  • KV Cache Scale Remapping: Enhanced the maybe_remap_kv_scale_name utility to correctly remap KV cache scale names, including support for qkv_proj parameters, ensuring accurate loading of attention-related scales in quantized models.
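To make the expert-scale handling above concrete, here is a minimal, hypothetical sketch of broadcasting a single per-tensor weight scale to all experts of a fused MoE layer; the helper name and shapes are illustrative assumptions, not the PR's actual implementation:

import torch

def broadcast_expert_scale(loaded_scale: torch.Tensor, num_experts: int) -> torch.Tensor:
    """Replicate a single per-tensor scale across all experts.

    ModelOPT checkpoints may store one scalar scale where the fused MoE
    parameter expects one value per expert; this expands it accordingly.
    """
    if loaded_scale.numel() == 1:
        # Single scalar in the checkpoint -> repeat it for every expert.
        return loaded_scale.reshape(1).repeat(num_experts)
    # Already per-expert; just sanity-check the shape.
    assert loaded_scale.numel() == num_experts
    return loaded_scale

# Example: a scalar scale of 0.02 expanded for an 8-expert layer.
print(broadcast_expert_scale(torch.tensor(0.02), num_experts=8))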

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for Qwen models with FP8 quantization using ModelOPT. The changes span several files, including configuration, model layers, and weight loading utilities. Key additions include logic to parse ModelOPT-specific quantization configurations and a new ModelOptFp8MoEMethod for handling MoE layers. Overall, the changes are well-structured to support the new quantization format.


mergify bot commented Jul 12, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @jingyu-ml.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 12, 2025
@mergify mergify bot removed the needs-rebase label Jul 30, 2025
@jingyu-ml jingyu-ml marked this pull request as ready for review July 30, 2025 21:31
@jingyu-ml jingyu-ml requested a review from sighingnow as a code owner July 30, 2025 21:31
@jingyu-ml jingyu-ml force-pushed the jingyux/dev-qwen-fp8 branch from 8afa3fa to 1bf1f84 on July 30, 2025 22:14
@jingyu-ml jingyu-ml marked this pull request as draft July 30, 2025 22:33
@jingyu-ml jingyu-ml closed this Jul 30, 2025
@jingyu-ml jingyu-ml force-pushed the jingyux/dev-qwen-fp8 branch from 1bf1f84 to ca9e2be on July 30, 2025 22:44
Labels
ci/build, llama (Related to Llama models), qwen (Related to Qwen models), v1
1 participant